
    An Investigation into Trust & Reputation for Agent-Based Virtual Organisations

    Trust is a prevalent concept in human society. In essence, it concerns our reliance on the actions of our peers, and the actions of other entities within our environment. For example, we may rely on our car starting in the morning to get to work on time, and on the actions of our fellow drivers, so that we may get there safely. For similar reasons, trust is becoming increasingly important in computing, as systems, such as the Grid, require computing resources to work together seamlessly across organisational and geographical boundaries (Foster et al., 2001). In this context, the reliability of resources in one organisation cannot be assumed from the point of view of another. Moreover, certain resources may fail more often than others, and for this reason, we argue that software systems must be able to assess the reliability of different resources, so that they may choose which resources to rely upon. With this in mind, our goal here is to develop a mechanism by which software entities can automatically assess the trustworthiness of a given entity (the trustee). In achieving this goal, we have developed a probabilistic framework for assessing trust based on observations of a trustee's past behaviour. Such observations may be accounted for either when they are made directly by the assessing party (the truster), or by a third party (reputation source). In the latter case, our mechanism can cope with the possibility that third party information is unreliable, either because the sender is lying, or because it has a different world view. In this document, we present our framework, and show how it can be applied to cases in which a trustee's actions are represented as binary events; for example, a trustee may cooperate with the truster, or it may defect. We place our work in context, by showing how it constitutes part of a system for managing coalitions of agents, operating in a grid computing environment. We then give an empirical evaluation of our method, which shows that it outperforms the most similar system in the literature in many important scenarios.
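    The binary-event setting above lends itself to a beta-distribution estimate of a trustee's cooperation probability. The sketch below is a minimal illustration of that idea, not the paper's actual formulation: the function name, the uniform Beta(1, 1) prior, and the example counts are all assumptions.

```python
# Minimal sketch: estimating the probability that a trustee cooperates,
# from binary outcomes (1 = cooperate, 0 = defect), using a Beta prior.
# The uniform Beta(1, 1) prior and all names are illustrative assumptions.

def trust_estimate(outcomes, alpha=1.0, beta=1.0):
    """Posterior mean of the trustee's cooperation probability.

    With a Beta(alpha, beta) prior and s cooperations out of n binary
    interactions, the posterior is Beta(alpha + s, beta + n - s),
    whose mean is (alpha + s) / (alpha + beta + n).
    """
    s = sum(outcomes)
    n = len(outcomes)
    return (alpha + s) / (alpha + beta + n)

# Example: 8 cooperations and 2 defections observed directly.
print(trust_estimate([1] * 8 + [0] * 2))  # 0.75
```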

    Sequential Decision Making with Untrustworthy Service Providers

    In this paper, we deal with the sequential decision making problem of agents operating in computational economies, where there is uncertainty regarding the trustworthiness of service providers populating the environment. Specifically, we propose a generic Bayesian trust model, and formulate the optimal Bayesian solution to the exploration-exploitation problem facing the agents when repeatedly interacting with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm to approximate that solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winners from both years in which the competition has been run.
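    As a rough illustration of the expected-value-of-perfect-information idea, the following sketch scores Bernoulli service providers with Beta posteriors by their posterior mean plus a myopic value-of-information bonus, estimated by Monte Carlo. This is a generic textbook-style stand-in, not the paper's algorithm; every name and parameter is an assumption.

```python
# Hedged sketch: provider selection by posterior mean + myopic value of
# perfect information (VPI), for Bernoulli providers with Beta posteriors.
# Not the paper's algorithm; a generic illustration of the idea.
import random

def vpi(a, b, mu_best, mu_second, is_best, samples=20000):
    """Monte Carlo estimate of the expected gain from learning this
    provider's true success rate p. For the currently best provider,
    information is valuable if p turns out worse than the runner-up;
    for any other provider, if p turns out better than the best."""
    gain = 0.0
    for _ in range(samples):
        p = random.betavariate(a, b)
        gain += max(mu_second - p, 0.0) if is_best else max(p - mu_best, 0.0)
    return gain / samples

def choose_provider(posteriors):
    """posteriors maps provider id -> (alpha, beta) Beta parameters."""
    means = {k: a / (a + b) for k, (a, b) in posteriors.items()}
    ranked = sorted(means, key=means.get, reverse=True)
    mu_best, mu_second = means[ranked[0]], means[ranked[1]]
    scores = {k: means[k] + vpi(a, b, mu_best, mu_second, k == ranked[0])
              for k, (a, b) in posteriors.items()}
    return max(scores, key=scores.get)

# A well-characterised provider versus an unexplored newcomer.
print(choose_provider({"known": (80, 20), "new": (1, 1)}))
```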

    TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS) which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory taking account of past interactions between agents, and when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
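    The combination of direct experience with possibly unreliable reputation reports can be sketched under a beta model as below. The fixed per-source weight is only an illustrative stand-in for TRAVOS's actual statistical estimate of a reporter's accuracy, and all names are assumptions.

```python
# Simplified sketch of combining direct observations with third-party
# reports as Beta-distribution counts, each report scaled by a reliability
# weight in [0, 1]. In TRAVOS proper the weight comes from a statistical
# estimate of the source's past accuracy; here it is a plain parameter.

def combined_trust(direct, reports, prior=(1.0, 1.0)):
    """direct:  (cooperations, defections) observed by the truster itself.
    reports: list of (cooperations, defections, weight) from third
             parties, with weight discounting possibly unreliable sources.
    Returns the posterior mean cooperation probability."""
    alpha, beta = prior
    alpha += direct[0]
    beta += direct[1]
    for s, f, w in reports:
        alpha += w * s
        beta += w * f
    return alpha / (alpha + beta)

# Little direct experience, two reports: one trusted, one heavily discounted.
print(combined_trust(direct=(1, 0),
                     reports=[(9, 1, 0.9), (0, 10, 0.2)]))
```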

    The ART of IAM: The Winning Strategy for the 2006 Competition

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for others, may betray that trust by not performing the actions as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. This situation has led to the development of a number of trust and reputation models, which aim to facilitate an agent's decision making in the face of uncertainty regarding the behaviour of its peers. However, these multifarious models employ a variety of different representations of trust between agents, and measure performance in many different ways. This has made it hard to adequately evaluate the relative properties of different models, raising the need for a common platform on which to compare competing mechanisms. To this end, the ART Testbed Competition has been proposed, in which agents using different trust models compete against each other to provide services in an open marketplace. In this paper, we present the winning strategy for this competition in 2006, provide an analysis of the factors that led to this success, and discuss lessons learnt from the competition about issues of trust in multiagent systems in general. Our strategy, IAM, is Intelligent (using statistical models for opponent modelling), Abstemious (spending its money parsimoniously based on its trust model) and Moral (providing fair and honest feedback to those that request it).

    Agent-Based Trust and Reputation in the Context of Inaccurate Information Sources

    Trust is a prevalent concept in human society that, in essence, concerns our reliance on the actions of other entities within our environment. For example, we may rely on our car starting to get to work on time, and on our fellow drivers, so that we may get there safely. For similar reasons, trust is becoming increasingly important in computing, as systems, such as the Grid, require integration of computing resources across organisational boundaries. In this context, the reliability of resources in one organisation cannot be assumed from the point of view of another, as certain resources may fail more often than others. For this reason, we argue that software systems must be able to assess the reliability of different resources, so that they may choose which of them to rely on. With this in mind, our goal is to develop mechanisms, or models, to aid decision making by an autonomous agent (the truster), when the consequences of its decisions depend on the actions of other agents (the trustees). To achieve this, we have developed a probabilistic framework for assessing trust based on a trustee's past behaviour, which we have instantiated through the creation of two novel trust models (TRAVOS and TRAVOS-C). These facilitate decision making in two different contexts with regard to trustee behaviour. First, using TRAVOS, a truster can make decisions in contexts where a trustee can only act in one of two ways: either it can cooperate, acting to the truster's advantage; or it can defect, thereby acting against the truster's interests. Second, using TRAVOS-C, a truster can make decisions about trustees that can act in a continuous range of ways, for example, taking into account the delivery time of a service. These models share an ability to account for observations of a trustee's behaviour, made either directly by the truster, or by a third party (reputation source). In the latter case, both models can cope with third party information that is unreliable, either because the sender is lying, or because it has a different world view. In addition, TRAVOS-C can assess a trustee for which there is little or no direct or reported experience, using information about other agents that share characteristics with the trustee. This is achieved using a probabilistic mechanism, which automatically accounts for the amount of correlation observed between agents' behaviour in a truster's environment.
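    The continuous-behaviour and agent-similarity ideas attributed to TRAVOS-C can be illustrated with a textbook normal shrinkage estimator: when direct evidence about a trustee is scarce, its estimate is pulled towards what has been observed of similar agents. This is only a stand-in for the abstract's description, not TRAVOS-C's actual model; all names and parameters are assumptions.

```python
# Illustrative sketch: (1) trustee behaviour as a continuous quantity
# (e.g. delivery time in days) rather than a binary outcome, and
# (2) borrowing strength from similar agents when direct evidence is
# scarce, via a standard normal shrinkage (precision-weighted) estimate.

def shrinkage_estimate(own_obs, group_mean, group_var, noise_var):
    """Posterior mean of a trustee's behaviour under a normal model:
    prior N(group_mean, group_var) learned from similar agents, plus
    own_obs direct observations with noise variance noise_var."""
    n = len(own_obs)
    if n == 0:
        return group_mean  # no direct evidence: fall back on peers
    sample_mean = sum(own_obs) / n
    # Precision-weighted combination of the data and the group prior.
    w = (n / noise_var) / (n / noise_var + 1.0 / group_var)
    return w * sample_mean + (1 - w) * group_mean

# One noisy delivery-time observation; similar agents average 5.0 days.
print(shrinkage_estimate([8.0], group_mean=5.0, group_var=1.0,
                         noise_var=4.0))  # 5.6: pulled towards the peers
```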

    Collaborative Sensing by Unmanned Aerial Vehicles

    In many military and civilian applications, Unmanned Aerial Vehicles (UAVs) provide an indispensable platform for gathering information about the situation on the ground. In particular, they have the potential to revolutionize the way in which information is collected, fused and disseminated. These advantages are greatly enhanced if swarms of multiple UAVs are used, since this enables the collection of data from multiple vantage points using multiple sensors. However, enhancements to overall operational performance can be realised only if the platforms have a high degree of autonomy, which is achieved through machine intelligence. With this in mind, we report on our recently launched project, SUAAVE (Sensing, Unmanned, Autonomous, Aerial VEhicles), which seeks to develop and evaluate a fully automated sensing platform consisting of multiple UAVs. To achieve this goal, we will take a multidisciplinary approach, focusing on the complex dependencies that exist between tasks such as data fusion, ad-hoc wireless networking, and multi-agent co-ordination. In this position paper, we highlight the related work in this area and outline our agenda for future work.

    Observation modelling for vision-based target search by unmanned aerial vehicles

    Unmanned Aerial Vehicles (UAVs) are playing an increasing role in gathering information about objects on the ground. In particular, a key problem is to detect and classify objects from a sequence of camera images. However, existing systems typically adopt an idealised model of sensor observations, by assuming they are independent, and take the form of maximum likelihood predictions of an object's class. In contrast, real vision systems produce output that can be highly correlated and corrupted by noise. Therefore, traditional approaches can lead to inaccurate or overconfident results, which in turn lead to poor decisions about what to observe next to improve these predictions. To address these issues, we develop a Gaussian Process based observation model that characterises the correlation between classifier outputs as a function of UAV position. We then use this to fuse classifier observations from a sequence of images and to plan the UAV's movements. In both real and simulated target search scenarios, we show that this can achieve a decrease in mean squared detection error of up to 66% relative to existing state-of-the-art methods.
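    The key modelling idea, as the abstract describes it, is to treat classifier scores as correlated observations of an underlying quantity that varies with UAV position, and to fuse them with a Gaussian Process rather than averaging them as if independent. The sketch below shows a generic GP fusion of this kind; the squared-exponential kernel and every parameter are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: GP fusion of classifier scores observed at different
# UAV positions. Nearby viewpoints give correlated scores, so the GP
# posterior mean fuses them instead of treating them as independent.
import numpy as np

def sq_exp_kernel(X1, X2, length=5.0, signal=1.0):
    """Squared-exponential covariance between two sets of 2-D positions."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return signal ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_fuse(positions, scores, query, noise=0.1):
    """GP posterior mean of the underlying class score at `query`,
    given noisy classifier scores observed at `positions`."""
    K = sq_exp_kernel(positions, positions) + noise ** 2 * np.eye(len(scores))
    k_star = sq_exp_kernel(query, positions)
    return k_star @ np.linalg.solve(K, scores)

# Three overlapping views of the same ground cell from nearby positions.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
scores = np.array([0.9, 0.8, 0.85])
print(gp_fuse(pos, scores, np.array([[1.0, 0.0]])))
```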